VMware Cloud Community
kaymm2
Contributor

NFS mounted on ESXi

I have a puzzling problem with NFS on my NAS (a D-Link DNS-323). I was able to add it as storage to my ESXi host. The problem is that the files I create within that datastore don't get listed by the ESXi host in the file browser. I've also SSH'd into the host, and it doesn't list the files either. However, if I connect with any other NFS client, the files show up. As an example, I created a folder called "New Folder" in the NFS share from the ESXi server, and it doesn't show up. Yet when I use my Mac to list the contents of that folder, it has no issues. I have many other files in that NFS share (in the esxi directory) as well, but the ESXi host does not see them.

What could it be?
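
In case it helps anyone, this is roughly how I'm checking from the ESXi console over SSH (the datastore name below is just an example of what I called mine):

esxcfg-nas -l

ls /vmfs/volumes/dns323-nfs/

esxcfg-nas -l shows the NFS mount is attached, but the ls on the datastore path comes back empty even though the files are there when I look from the Mac.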

0 Kudos
20 Replies
weinstein5
Immortal

Welcome to the forums - it could be the configuration of your NFS share - you need to make sure your ESX host has root access by setting the no_root_squash option on the NFS export.
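
On a typical Linux-based NFS server that means a line along these lines in the exports file (paths and subnet are only placeholders), followed by re-reading the exports:

/mnt/nfs_datastore 192.168.1.0/24(rw,no_root_squash,sync)

exportfs -ra

How you apply the change on a NAS appliance varies - some only pick it up after restarting the NFS service or rebooting.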

If you find this or any other answer useful please consider awarding points by marking the answer correct or helpful

0 Kudos
kaymm2
Contributor

Thanks for the reply. I checked my export file and it is as so:

/mnt/HD_a2 192.168.1.0/24(rw,no_root_squash)

/mnt/HD_a4 192.168.1.0/24(rw,no_root_squash)

/mnt/web_page 192.168.1.0/24(rw,no_root_squash)

/mnt/HD_b4 192.168.1.0/24(rw,no_root_squash)

/mnt/HD_a2/esxi 192.168.1.0/24(rw,no_root_squash,uid=root,gid=root)

I can upload files, create folders, etc. from the datastore browser into the NFS share, but they never get listed. When I use another NFS client, they show up.

0 Kudos
ShahidSheikh
Enthusiast

What are uid=root and gid=root in exports supposed to do? I don't think they are even valid export options. anonuid and anongid are, but since you are only dealing with one user (root) here, there is no need for those either. Just use no_root_squash and make sure the esxi directory has root as both owner and group.
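
Assuming you can get a shell on the DNS-323, something like this is what I mean (using your existing paths):

chown root:root /mnt/HD_a2/esxi

and in the exports file:

/mnt/HD_a2/esxi 192.168.1.0/24(rw,no_root_squash)

then re-export with exportfs -ra if the box supports it, or restart its NFS service.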

0 Kudos
Karunakar
Hot Shot

Hi,

Can you retry as below?

/mnt/HD_a2/esxi 192.168.1.0/24(rw,sync,no_root_squash)

With the sync option, the server commits writes to disk before acknowledging them to the client.

This should help.

-Karunakar

0 Kudos
john_tch
Contributor

I encountered the same issue with my D-Link DNS-323. Did you manage to solve it?

Thanks in advance.

0 Kudos
nick_couchman
Immortal

sync is a default option on Linux NFS servers. I'm not sure what the Dlink NAS runs, though.

0 Kudos
nick_couchman
Immortal

When you create these other files and folders on the NAS, what uid are they being created under - root (0) or another one, or both?
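
If you can get a shell on the NAS, or mount the export from a Linux box, listing with numeric IDs will show you who owns what (path taken from the exports posted above):

ls -ln /mnt/HD_a2/esxi

Files created by the ESXi host should show uid/gid 0 if root squashing is really off.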

0 Kudos
john_tch
Contributor

I think the issue I am facing now is the version of the NFS service embedded in the DNS-323. It is a user-space NFS server instead of a kernel-space one. I found a workaround on a mod site, but I would need to do some soldering (which I am trying to avoid) to achieve my goal. Any more feasible idea would be very much appreciated.

0 Kudos
kaymm2
Contributor

I also think it's the NFS implementation in the DNS-323; I have since given up on it. I'm just running an NFS service on my Mac and using it with ESX. I just want to be able to copy things somewhere to back up, and that works for now.
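
For anyone wanting to do the same, it's just a BSD-style line in /etc/exports on the Mac plus adding the datastore on the host - roughly like this (path, subnet, IP and datastore name are only examples, and how you restart the NFS service depends on your OS X version):

/Users/Shared/esx_backup -maproot=root -network 192.168.1.0 -mask 255.255.255.0

esxcfg-nas -a -o 192.168.1.50 -s /Users/Shared/esx_backup mac-nfs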

0 Kudos
ShahidSheikh
Enthusiast

I think it has more to do with NFS v3 vs NFS v2 than user space vs kernel implementation of NFS. You need NFS v3 with large file support for ESX/ESXi.

Even if you were able to get NFS v3 running on the DLink box, it would be too slow to act as a datastore for VMware. You may be better off using a low power machine running something like OpenFiler.
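
A quick way to see what the box actually speaks is to query its portmapper from any Linux machine (the IP is just an example):

rpcinfo -p 192.168.1.32

If nfs and mountd only advertise version 2 there, that alone would rule it out for ESX/ESXi.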

0 Kudos
rhuddusa
Contributor

I've seen the same behavior on a DroboShare with the userland NFS drobo app. Extremely frustrating !!!

I was able to mount it and upload files / create folders from ESXi, but they would not show up in the datastore browser, even though they are created on the NFS server.

If anyone has any ideas, I'm happy to provide a test bed.

0 Kudos
rhuddusa
Contributor

Here's what I have dug up in my research:

VMware ESX and ESXi need the READDIRPLUS procedure from NFSv3, and apparently:

"UNFS3 supports all NFSv3 procedures with the exception of the READDIRPLUS procedure. It tries to provide as much information to NFS clients as possible, within the limits possible from user-space."

so it appears that my NFS server (the userland NFS Drobo app) doesn't support this procedure.
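
One way to confirm this is to capture the NFS traffic on the server side while the datastore browser tries to list the share, and look for READDIRPLUS calls coming back with errors (interface name and capture file are just examples):

tcpdump -s 0 -w nfs-trace.pcap -i eth0 port 2049

Opening the capture in Wireshark shows the NFS procedures by name, so it's easy to spot whether the READDIRPLUS replies are failing.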

I'd like to suggest that VMware support user-space NFS. All of us who are hacking around at home with a cheap off-the-shelf NAS and a hacked user-space NFS server are out of luck.

0 Kudos
mondogaraj
Contributor

I'm currently having problems with the DroboShare and the userland NFS Drobo app as well. I can't mount the Drobo from my Linux machine. Is there some trick for doing this? I'm not finding the info I need on the Drobo forums or elsewhere on the Internet. I'd really appreciate it if you could explain how to get this working...

0 Kudos
rhuddusa
Contributor

Are you trying to mount the DroboShare NAS on your ESXi host or on the Linux guest that is running on the ESXi host?

0 Kudos
mondogaraj
Contributor

Right now I'm just trying to mount the Drobo from a Linux machine (CentOS 5) running VMware Server 2.0. If/when I get this working, the next thing to try will be mounting the Drobo from an ESXi box.

0 Kudos
mondogaraj
Contributor

Actually, I just got this to work!

Commenting out the two default lines in the exports file seems to have done the trick.

#/mnt/DroboShares/Drobo (rw,no_root_squash)

#/mnt/DroboShares/Drobo1 (rw,no_root_squash)

/mnt/DroboShares/MyDrobo/test (rw,no_root_squash)

I suppose this is necessary since the directories in the commented lines do not exist?
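
If anyone needs to double-check their edits, the list of what the server is actually exporting can be pulled from any Linux box with showmount (the hostname is whatever your DroboShare answers to):

showmount -e droboshare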

0 Kudos
rhuddusa
Contributor

You are correct. By default, the Drobo names the first volume Drobo, and when you add more space than will fit in your formatted volume, I guess the next default volume name is Drobo1.

I assume that when you formatted your Drobo, you named your volume "MyDrobo", so your volume name needed to be changed in the exports file.

0 Kudos
blueivy
Contributor

I just wanted to add to this thread.

After many hours today, I have finally given up on the DroboPro. It really is an enormous letdown as a device. Few features compared to competitors, and very expensive.

I eventually managed to get the DroboShare (I have a DroboPro and a DroboShare connected to it) connected to a straightforward Linux machine. You obviously have to install the NFS DroboApp, which has zero documentation (okay, it's straightforward once you figure it out - you need to edit the exports file if anything has changed), and once that is installed you have a sort-of working NFS system.

The drawback of this NFS system is that you can't actually browse the shares from the ESXi box (probably due to the restriction mentioned earlier in this thread). You can copy files to the shares and work with them, you just can't see anything, as the listing always comes up empty. The only way to see them is to browse from a Windows (or Mac) computer.

Using the ghettoVCB script to back up the VMs on the box just doesn't work. It's not the script's fault but the Drobo's. Using the simple vmkfstools clone virtual disk command, you get the same error after about 90 seconds. The error is always:

Failed to clone disk: Input/output error (327689)

If you use the vmkfstools clone virtual disk command to clone the VM to a VMFS datastore (in my case the same store it is in), then it works fine. So the problem is the NFS on the Drobo. It just does not work.
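
For reference, by the clone command I mean the plain vmkfstools virtual disk clone, roughly like this (datastore and VM names are just examples):

vmkfstools -i /vmfs/volumes/datastore1/vm1/vm1.vmdk /vmfs/volumes/drobo-nfs/vm1/vm1.vmdk

With the Drobo NFS share as the destination it dies with the input/output error above; with a VMFS destination it completes.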

I know the NFS server is a separate app from the Drobo and it's not supported, blah blah blah. However, every NAS box I know of has NFS on it as standard. This £450 lump of plastic (as it has now become) doesn't.

Tomorrow, the Drobo is being retired (after a month of being on and really just one day of use). I will be putting the drives into an old PC and installing either OpenFiler or FreeNAS (I've used FreeNAS in the past and it has always worked a treat) and backing up ESXi that way.

I would like to say what I really think of the Drobo, but this post would be deleted. My advice to ANYBODY thinking of getting one: don't. Go for something that natively supports NFS. Don't do what I did and look at the DroboPro and then buy a Drobo thinking it was just a smaller version. It's not - they are totally different.

If you really want a Drobo, check out eBay tomorrow and you can have mine.

0 Kudos
quocx
Contributor

Did you encounter issues with the DroboPro, or was it a lack of features that led you to conclude the DroboPro was not an option for your VMware environment?

0 Kudos